Ridge Regression

Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering. Also known as Tikhonov regularization, named for Andrey Tikhonov, it is a method of regularization of ill-posed problems. It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff).

The theory was first introduced by Hoerl and Kennard in 1970 in their ''Technometrics'' papers "Ridge Regression: Biased Estimation for Nonorthogonal Problems" and "Ridge Regression: Applications to Nonorthogonal Problems", the result of ten years of research into the field of ridge analysis. Ridge regression was developed as a remedy for the imprecision of least-squares estimators when linear regression models have some multicollinear (highly correlated) independent variables, by creating a ridge regression estimator (RR). The ridge estimator gives a more precise estimate of the regression parameters, as its variance and mean square error are often smaller than those of the previously derived least-squares estimators.


Overview

In the simplest case, the problem of a near-singular moment matrix \mathbf{X}^\mathsf{T}\mathbf{X} is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by
: \hat{\beta}_R = (\mathbf{X}^\mathsf{T}\mathbf{X} + \lambda \mathbf{I})^{-1} \mathbf{X}^\mathsf{T}\mathbf{y},
where \mathbf{y} is the regressand, \mathbf{X} is the design matrix, \mathbf{I} is the identity matrix, and the ridge parameter \lambda \geq 0 serves as the constant shifting the diagonals of the moment matrix. It can be shown that this estimator is the solution to the least squares problem subject to the constraint \beta^\mathsf{T}\beta = c, which can be expressed as a Lagrangian:
: \min_{\beta} \; (\mathbf{y} - \mathbf{X}\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\beta) + \lambda (\beta^\mathsf{T}\beta - c),
which shows that \lambda is nothing but the Lagrange multiplier of the constraint. Typically, \lambda is chosen according to a heuristic criterion, so that the constraint will not be satisfied exactly. Specifically in the case of \lambda = 0, in which the constraint is non-binding, the ridge estimator reduces to ordinary least squares. A more general approach to Tikhonov regularization is discussed below.
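
The closed form above can be computed directly; the following is a minimal NumPy sketch on synthetic near-collinear data (the data, variable names, and choice of \lambda are illustrative assumptions, not part of the original exposition):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with two highly correlated predictors (illustrative).
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 3.0 * x1 - 1.0 * x2 + rng.normal(size=n)

lam = 1.0                             # ridge parameter, lambda >= 0
p = X.shape[1]

# Ridge estimator: (X'X + lambda I)^{-1} X'y.
# solve() is preferred over forming an explicit inverse.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# With lambda = 0 this reduces to ordinary least squares, whose
# variance explodes here because X'X is near-singular.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
print("ridge:", beta_ridge)
print("OLS:  ", beta_ols)
```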


History

Tikhonov regularization was invented independently in many different contexts. It became widely known from its application to integral equations in the work of Andrey Tikhonov and David L. Phillips. Some authors use the term Tikhonov–Phillips regularization. The finite-dimensional case was expounded by Arthur E. Hoerl, who took a statistical approach, and by Manus Foster, who interpreted this method as a Wiener–Kolmogorov (Kriging) filter. Following Hoerl, it is known in the statistical literature as ridge regression, named after the shape along the diagonal of the identity matrix.


Tikhonov regularization

Suppose that for a known matrix A and vector \mathbf{b}, we wish to find a vector \mathbf{x} such that
: A\mathbf{x} = \mathbf{b}.
The standard approach is ordinary least squares linear regression. However, if no \mathbf{x} satisfies the equation, or if more than one does (that is, the solution is not unique), the problem is said to be ill posed. In such cases, ordinary least squares estimation leads to an overdetermined, or more often an underdetermined, system of equations. Most real-world phenomena have the effect of low-pass filters in the forward direction where A maps \mathbf{x} to \mathbf{b}. Therefore, in solving the inverse problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues/singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of \mathbf{x} that is in the null space of A, rather than allowing for a model to be used as a prior for \mathbf{x}.

Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as
: \|A\mathbf{x} - \mathbf{b}\|_2^2,
where \|\cdot\|_2 is the Euclidean norm. In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:
: \|A\mathbf{x} - \mathbf{b}\|_2^2 + \|\Gamma \mathbf{x}\|_2^2
for some suitably chosen Tikhonov matrix \Gamma. In many cases, this matrix is chosen as a scalar multiple of the identity matrix (\Gamma = \alpha I), giving preference to solutions with smaller norms; this is known as L_2 regularization. In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by \hat{x}, is given by
: \hat{x} = (A^\top A + \Gamma^\top \Gamma)^{-1} A^\top \mathbf{b}.
The effect of regularization may be varied by the scale of matrix \Gamma. For \Gamma = 0 this reduces to the unregularized least-squares solution, provided that (A^\top A)^{-1} exists. L_2 regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines, and matrix factorization.
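
The explicit solution translates directly into code; below is a minimal sketch (the helper name, the synthetic data, and the first-difference choice of \Gamma are illustrative assumptions):

```python
import numpy as np

def tikhonov_solve(A, b, Gamma):
    """Minimize ||A x - b||^2 + ||Gamma x||^2 via the normal equations:
    x = (A'A + Gamma'Gamma)^{-1} A'b."""
    return np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)

rng = np.random.default_rng(1)
m, n = 50, 30
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
alpha = 0.5

# Gamma = alpha I prefers small-norm solutions (L2 regularization).
x_l2 = tikhonov_solve(A, b, alpha * np.eye(n))

# A first-difference operator as Gamma instead penalizes roughness,
# preferring solutions whose neighboring components vary slowly.
D = np.diff(np.eye(n), axis=0)   # (n-1) x n difference matrix
x_smooth = tikhonov_solve(A, b, alpha * D)
```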


Generalized Tikhonov regularization

For general multivariate normal distributions for x and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an x to minimize
: \|Ax - b\|_P^2 + \|x - x_0\|_Q^2,
where we have used \|x\|_Q^2 to stand for the weighted norm squared x^\top Q x (compare with the Mahalanobis distance). In the Bayesian interpretation, P is the inverse covariance matrix of b, x_0 is the expected value of x, and Q is the inverse covariance matrix of x. The Tikhonov matrix is then given as a factorization of the matrix Q = \Gamma^\top \Gamma (e.g. the Cholesky factorization) and is considered a whitening filter. This generalized problem has an optimal solution x^* which can be written explicitly using the formula
: x^* = (A^\top P A + Q)^{-1} (A^\top P b + Q x_0),
or equivalently,
: x^* = x_0 + (A^\top P A + Q)^{-1} A^\top P (b - A x_0).
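
The second (residual) form transcribes directly; the sketch below assumes all weighting matrices are given, and the isotropic example values are illustrative:

```python
import numpy as np

def generalized_tikhonov(A, b, P, Q, x0):
    """x* = x0 + (A'PA + Q)^{-1} A'P (b - A x0), the minimizer of
    ||A x - b||_P^2 + ||x - x0||_Q^2."""
    AtP = A.T @ P
    return x0 + np.linalg.solve(AtP @ A + Q, AtP @ (b - A @ x0))

rng = np.random.default_rng(2)
m, n = 40, 20
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

P = np.eye(m)           # inverse data covariance (assumed isotropic)
Q = 0.25 * np.eye(n)    # inverse prior covariance (assumed isotropic)
x0 = np.zeros(n)        # prior mean (assumed zero)
x_star = generalized_tikhonov(A, b, P, Q, x0)
```

With P = I, Q = \Gamma^\top \Gamma, and x_0 = 0, this reduces to the ordinary Tikhonov solution of the previous section.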


Lavrentyev regularization

In some situations, one can avoid using the transpose A^\top, as proposed by Mikhail Lavrentyev. For example, if A is symmetric positive definite, i.e. A = A^\top > 0, so is its inverse A^{-1}, which can thus be used to set up the weighted norm squared \|x\|_P^2 = x^\top A^{-1} x in the generalized Tikhonov regularization, leading to minimizing
: \|Ax - b\|_{A^{-1}}^2 + \|x - x_0\|_Q^2
or, equivalently up to a constant term,
: x^\top (A + Q) x - 2 x^\top (b + Q x_0).
This minimization problem has an optimal solution x^* which can be written explicitly using the formula
: x^* = (A + Q)^{-1} (b + Q x_0),
which is nothing but the solution of the generalized Tikhonov problem where A = A^\top = P^{-1}. Lavrentyev regularization, if applicable, is advantageous over the original Tikhonov regularization, since the Lavrentyev matrix A + Q can be better conditioned, i.e., have a smaller condition number, than the Tikhonov matrix A^\top A + \Gamma^\top \Gamma.
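
A minimal sketch of the Lavrentyev formula, with a symmetric positive definite A constructed purely for illustration; the final line prints the two condition numbers compared above:

```python
import numpy as np

def lavrentyev_solve(A, b, Q, x0):
    """x* = (A + Q)^{-1} (b + Q x0); no transpose of A is needed."""
    return np.linalg.solve(A + Q, b + Q @ x0)

rng = np.random.default_rng(3)
n = 25
M = rng.normal(size=(n, n))
A = M @ M.T + 1e-3 * np.eye(n)   # symmetric positive definite (illustrative)
b = rng.normal(size=n)
Q = 0.1 * np.eye(n)              # assumed prior weighting
x0 = np.zeros(n)

x_star = lavrentyev_solve(A, b, Q, x0)

# Conditioning advantage: forming A'A squares the singular values of A,
# so A + Q is typically much better conditioned than A'A + Q.
print(np.linalg.cond(A + Q), np.linalg.cond(A.T @ A + Q))
```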


Regularization in Hilbert space

Typically discrete linear ill-conditioned problems result from discretization of integral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpret A as a compact operator on Hilbert spaces, and x and b as elements in the domain and range of A. The operator A^* A + \Gamma^\top \Gamma is then a self-adjoint bounded invertible operator.


Relation to singular-value decomposition and Wiener filter

With \Gamma = \alpha I, this least-squares solution can be analyzed in a special way using the singular-value decomposition. Given the singular value decomposition
: A = U \Sigma V^\top
with singular values \sigma_i, the Tikhonov regularized solution can be expressed as
: \hat{x} = V D U^\top b,
where D has diagonal values
: D_{ii} = \frac{\sigma_i}{\sigma_i^2 + \alpha^2}
and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem. For the generalized case, a similar representation can be derived using a generalized singular-value decomposition. Finally, it is related to the Wiener filter:
: \hat{x} = \sum_{i=1}^q f_i \frac{u_i^\top b}{\sigma_i} v_i,
where the Wiener weights are f_i = \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2} and q is the rank of A.
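
The filter-factor form is straightforward to implement and agrees with the normal-equations solution; a minimal sketch (the synthetic data and names are illustrative):

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov solution for Gamma = alpha I via SVD filter factors
    D_ii = sigma_i / (sigma_i^2 + alpha^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    d = s / (s**2 + alpha**2)        # regularized inverse singular values
    return Vt.T @ (d * (U.T @ b))

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 40))
b = rng.normal(size=60)
alpha = 0.3

x_svd = tikhonov_svd(A, b, alpha)

# Agrees with the direct form (A'A + alpha^2 I)^{-1} A'b.
x_direct = np.linalg.solve(A.T @ A + alpha**2 * np.eye(40), A.T @ b)
assert np.allclose(x_svd, x_direct)
```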


Determination of the Tikhonov factor

The optimal regularization parameter \alpha is usually unknown and often in practical problems is determined by an ''ad hoc'' method. A possible approach relies on the Bayesian interpretation described below. Other approaches include the discrepancy principle, cross-validation, the L-curve method, restricted maximum likelihood, and the unbiased predictive risk estimator. Grace Wahba proved that the optimal parameter, in the sense of leave-one-out cross-validation, minimizes
: G = \frac{\operatorname{RSS}}{\tau^2} = \frac{\left\| X \hat{\beta} - y \right\|^2}{\left[ \operatorname{Tr}\left( I - X (X^\top X + \alpha^2 I)^{-1} X^\top \right) \right]^2},
where \operatorname{RSS} is the residual sum of squares, and \tau is the effective number of degrees of freedom. Using the previous SVD decomposition, we can simplify the above expression:
: \operatorname{RSS} = \left\| y - \sum_{i=1}^q (u_i' b) u_i \right\|^2 + \left\| \sum_{i=1}^q \frac{\alpha^2}{\sigma_i^2 + \alpha^2} (u_i' b) u_i \right\|^2,
: \operatorname{RSS} = \operatorname{RSS}_0 + \left\| \sum_{i=1}^q \frac{\alpha^2}{\sigma_i^2 + \alpha^2} (u_i' b) u_i \right\|^2,
and
: \tau = m - \sum_{i=1}^q \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2} = m - q + \sum_{i=1}^q \frac{\alpha^2}{\sigma_i^2 + \alpha^2}.
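
These SVD expressions make the criterion cheap to evaluate over a grid of candidate \alpha values; the following sketch is one possible implementation (the grid, data, and function name are assumptions):

```python
import numpy as np

def gcv_score(alpha, U, s, b, m):
    """Wahba's criterion G = RSS / tau^2, using the SVD simplifications
    for RSS and the effective degrees of freedom tau."""
    utb = U.T @ b
    fit = s**2 / (s**2 + alpha**2)            # filter factors
    rss0 = b @ b - utb @ utb                  # part of b outside range(A)
    rss = rss0 + np.sum(((1.0 - fit) * utb) ** 2)
    tau = m - np.sum(fit)                     # effective degrees of freedom
    return rss / tau**2

rng = np.random.default_rng(5)
m, n = 80, 30
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n) + 0.1 * rng.normal(size=m)

U, s, _ = np.linalg.svd(A, full_matrices=False)
alphas = np.logspace(-4, 2, 200)
alpha_best = min(alphas, key=lambda a: gcv_score(a, U, s, b, m))
print("GCV-selected alpha:", alpha_best)
```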


Relation to probabilistic formulation

The probabilistic formulation of an inverse problem introduces (when all uncertainties are Gaussian) a covariance matrix C_M representing the ''a priori'' uncertainties on the model parameters, and a covariance matrix C_D representing the uncertainties on the observed parameters. In the special case when these two matrices are diagonal and isotropic, C_M = \sigma_M^2 I and C_D = \sigma_D^2 I, the equations of inverse theory reduce to the equations above, with \alpha = \sigma_D / \sigma_M.
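
To make the reduction explicit (a short derivation sketch under the stated isotropy assumptions), the Gaussian misfit of inverse theory with prior mean x_0 is
: (Ax - b)^\top C_D^{-1} (Ax - b) + (x - x_0)^\top C_M^{-1} (x - x_0).
Substituting C_D = \sigma_D^2 I and C_M = \sigma_M^2 I and multiplying through by \sigma_D^2, which does not change the minimizer, gives
: \|Ax - b\|_2^2 + \left(\frac{\sigma_D}{\sigma_M}\right)^2 \|x - x_0\|_2^2,
the Tikhonov objective with \alpha = \sigma_D / \sigma_M.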


Bayesian interpretation

Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix \Gamma seems rather arbitrary, the process can be justified from a Bayesian point of view. Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, the prior probability distribution of x is sometimes taken to be a multivariate normal distribution. For simplicity here, the following assumptions are made: the means are zero; the components are independent; the components have the same standard deviation \sigma_x. The data are also subject to errors, and the errors in b are also assumed to be independent with zero mean and standard deviation \sigma_b. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the ''a priori'' distribution of x, according to Bayes' theorem. If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimum-variance linear unbiased estimator.


See also

* LASSO estimator, another regularization method in statistics
* Elastic net regularization
* Matrix regularization

